Solving and learning nonlinear PDEs with Gaussian processes
Authors
Abstract
We introduce a simple, rigorous, and unified framework for solving nonlinear partial differential equations (PDEs), and for solving inverse problems (IPs) involving the identification of parameters in PDEs, using Gaussian processes. The proposed approach: (1) provides a natural generalization of collocation kernel methods to nonlinear PDEs and IPs; (2) has guaranteed convergence for a very general class of PDEs, and comes equipped with a path to compute error bounds for specific PDE approximations; (3) inherits the state-of-the-art computational complexity of linear solvers for dense kernel matrices. The main idea of our method is to approximate the solution of a given PDE as the maximum a posteriori (MAP) estimator of a Gaussian process conditioned on solving the PDE at a finite number of collocation points. Although this optimization problem is infinite-dimensional, it can be reduced to a finite-dimensional one by introducing additional variables corresponding to the values of the derivatives of the solution at the collocation points; this generalizes the representer theorem arising in Gaussian process regression. The reduced optimization problem has a quadratic objective function subject to nonlinear constraints; it is solved with a variant of the Gauss–Newton method. The resulting algorithm (a) can be interpreted as solving successive linearizations of the nonlinear PDE, and (b) is in practice found to converge in a small number of iterations (2 to 10) for a wide range of PDEs. Most traditional approaches to IPs interleave parameter updates with numerical solution of the PDE; our algorithm solves for both simultaneously. Experiments on nonlinear elliptic PDEs, Burgers' equation, a regularized Eikonal equation, and an IP for permeability identification in Darcy flow illustrate the efficacy and scope of our framework.
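The successive-linearization interpretation in point (a) can be illustrated with a small self-contained sketch. The code below is not the paper's implementation: it solves a 1D nonlinear elliptic model problem -u'' + u^3 = f on (0, 1) with zero boundary values by linearizing u^3 at the current iterate and solving each resulting linear PDE with symmetric Gaussian-kernel collocation; the grid size, kernel width `sigma`, nugget, and function name are all illustrative choices.

```python
import numpy as np

def solve_nonlinear_pde(m=26, sigma=0.2, iters=8, nugget=1e-8):
    """Sketch: solve -u'' + u^3 = f, u(0) = u(1) = 0, by Newton linearization,
    with each linear step solved via symmetric Gaussian-kernel collocation.
    Manufactured solution u*(x) = sin(pi x)."""
    x = np.linspace(0.0, 1.0, m)
    interior = (x > 0.0) & (x < 1.0)
    f = np.pi**2 * np.sin(np.pi * x) + np.sin(np.pi * x) ** 3
    u = np.zeros(m)  # initial guess

    # Gaussian kernel k(x, y) = exp(-r^2 / (2 sigma^2)), r = x - y,
    # and its derivatives (d2/dx2 equals d2/dy2 for this radial kernel).
    R = x[:, None] - x[None, :]
    K = np.exp(-R**2 / (2.0 * sigma**2))
    K2 = (R**2 - sigma**2) / sigma**4 * K                                # d2k/dy2
    K4 = (R**4 - 6.0 * sigma**2 * R**2 + 3.0 * sigma**4) / sigma**8 * K  # d4k/dx2dy2

    Li = interior[:, None]
    Lj = interior[None, :]
    for _ in range(iters):
        # Newton linearization u^3 ~ u_n^3 + 3 u_n^2 (u - u_n): each step
        # solves the linear PDE  -u'' + c u = g,  c = 3 u_n^2, g = f + 2 u_n^3.
        c = np.where(interior, 3.0 * u**2, 0.0)
        g = np.where(interior, f + 2.0 * u**3, 0.0)  # boundary rows enforce u = 0
        # A[i, j] = L_i^x L_j^y k(x_i, x_j), where L is (-d2 + c) at interior
        # points and the identity at boundary points.
        A = np.where(
            Li & Lj,
            K4 - c[None, :] * K2 - c[:, None] * K2 + c[:, None] * c[None, :] * K,
            np.where(Li, -K2 + c[:, None] * K,
                     np.where(Lj, -K2 + c[None, :] * K, K)),
        )
        alpha = np.linalg.solve(A + nugget * np.eye(m), g)
        # New iterate at the nodes: u(x_i) = sum_j alpha_j L_j^y k(x_i, x_j).
        u = np.where(Lj, -K2 + c[None, :] * K, K) @ alpha
    return x, u
```

On this smooth model problem the iteration settles after a handful of linearization steps, consistent with the 2-to-10 iterations reported in the abstract; the nugget term regularizes the dense (and ill-conditioned) kernel matrix before the solve.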
Similar Resources
Nonlinear Inverse Reinforcement Learning with Gaussian Processes
We present a probabilistic algorithm for nonlinear inverse reinforcement learning. The goal of inverse reinforcement learning is to learn the reward function in a Markov decision process from expert demonstrations. While most prior inverse reinforcement learning algorithms represent the reward as a linear combination of a set of features, we use Gaussian processes to learn the reward as a nonli...
Relational Learning with Gaussian Processes
Correlation between instances is often modelled via a kernel function using input attributes of the instances. Relational knowledge can further reveal additional pairwise correlations between variables of interest. In this paper, we develop a class of models which incorporates both reciprocal relational information and input attributes using Gaussian process techniques. This approach provides a...
Solving PDEs with Intrepid
Intrepid is a Trilinos package for advanced discretizations of Partial Differential Equations (PDEs). The package provides a comprehensive set of tools for local, cell-based construction of a wide range of numerical methods for PDEs. This paper describes the mathematical ideas and software design principles incorporated in the package. We also provide representative examples showcasing the use ...
Reinforcement learning with kernels and Gaussian processes
Kernel methods have become popular in many sub-fields of machine learning, with the exception of reinforcement learning; they facilitate rich representations and enable machine learning techniques to work in diverse input spaces. We describe a principled approach to the policy evaluation problem of reinforcement learning. We present a temporal difference (TD) learning approach using kernel functions. Ou...
Solving PDEs with Hermite Interpolation
We examine the use of Hermite interpolation, that is, interpolation using derivative data, in place of Lagrange interpolation to develop high-order PDE solvers. The fundamental properties of Hermite interpolation are recalled, with an emphasis on their smoothing effect and robust performance for nonsmooth functions. Examples from the CHIDES library are presented to illustrate the construction an...
Journal
Journal title: Journal of Computational Physics
Year: 2021
ISSN: 1090-2716, 0021-9991
DOI: https://doi.org/10.1016/j.jcp.2021.110668